
feat(mobile): use efficient sync #8842

Merged: 9 commits merged into main from feat/app-efficient-sync on May 14, 2024
Conversation

@fyfrey (Contributor) commented Apr 16, 2024

Follow-up for #8755.

Enables the use of the more efficient endpoints to perform sync operations.

Only merge after #9189 has been in a release for 1-2 weeks.
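
For context, a rough sketch of the shape of the change (a sketch only; the callback names below are placeholders, not the actual sync service or generated API client):

// Sketch only: the callbacks stand in for the real sync service and
// generated API client methods; this is not the code in this PR.
Future<void> syncRemoteAssets({
  required Future<DateTime?> Function() loadCheckpoint,
  required Future<void> Function(DateTime checkpoint) saveCheckpoint,
  required Future<void> Function() runChunkedFullSync,
  required Future<void> Function(DateTime since) runDeltaSync,
}) async {
  final lastSync = await loadCheckpoint();
  if (lastSync == null) {
    // No checkpoint yet: page through every remote asset in fixed-size chunks.
    await runChunkedFullSync();
  } else {
    // Checkpoint exists: fetch only assets changed or deleted since last sync.
    await runDeltaSync(lastSync);
  }
  await saveCheckpoint(DateTime.now().toUtc());
}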

cloudflare-workers-and-pages bot commented Apr 16, 2024

Deploying immich with Cloudflare Pages
Latest commit: db6a66b
Status: ✅ Deploy successful!
Preview URL: https://6dd0596e.immich.pages.dev
Branch Preview URL: https://feat-app-efficient-sync.immich.pages.dev

@fyfrey marked this pull request as ready for review on April 25, 2024 05:43
Reviewed code excerpt:

for (int i = 0;; i += chunkSize) {
DateTime? lastCreationDate;
String? lastId;
for (;;) {
Contributor:

while loop?

Contributor Author (fyfrey):

It's identical to while(true). I can change it, but it doesn't really matter, I'd say :)

Contributor:

I see this a lot in embedded-systems code; I'm not used to seeing it in a higher-level language 😝

Contributor Author (fyfrey):

I personally use this for loops where I break out by other means. while(true) I use for endless tasks, like a server listening indefinitely until it's killed, etc.

Contributor:

Can this be rewritten as a do-while loop on the condition assets.length < chunkSize? I think that's the only break condition I see here. I concur that while (true) loops are odd and potentially dangerous. I also think for (;;) is less readable; I did not know what it meant until I read some of the other comments.

Contributor Author (fyfrey):

Sadly, I can't put the break condition in the loop condition check: it would not even run the first pass of the loop. I can only do while(true) instead of for(;;). The loop has two break conditions that cannot be handled in the loop header.
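
For reference, a minimal sketch of the loop shape being discussed (fetchChunk and the record fields are assumptions, not the actual service code in this PR):

// Sketch only: fetchChunk and the record fields are hypothetical stand-ins.
Future<int> pageThroughAssets(
  Future<List<({String id, DateTime createdAt})>> Function(
          DateTime? lastCreationDate, String? lastId, int take)
      fetchChunk, {
  int chunkSize = 10000,
}) async {
  DateTime? lastCreationDate;
  String? lastId;
  var total = 0;
  while (true) {
    final chunk = await fetchChunk(lastCreationDate, lastId, chunkSize);
    if (chunk.isEmpty) break; // break #1: nothing left on the server
    total += chunk.length;
    lastCreationDate = chunk.last.createdAt; // advance the paging checkpoint
    lastId = chunk.last.id;
    if (chunk.length < chunkSize) break; // break #2: final, partial page
  }
  return total;
}

Both breaks depend on the chunk that was just fetched, which is why neither fits in the loop header without skipping the first request.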

@martyfuhry (Contributor) left a comment:

I wish I could be more help, as I'm not tremendously familiar with the backup sync. But I'm happy to do any testing if you'd like me to, as well!

@alextran1502 (Contributor) commented Apr 26, 2024

I ran into this error on the server when running the PR:

[screenshot of the server error]

Here is the mobile error log: EA79A027-ADF0-404F-AF11-97A2F9DC7B91

@NicholasFlamy (Member), replying to the error report above:

I would guess this is caused by this:

@fyfrey marked this pull request as draft on April 26, 2024 20:41
@fyfrey (Contributor Author) commented Apr 26, 2024

Re: "I ran into this error on the server when running the PR"

It seems some more server changes are required for stacked assets (my test instance probably had no stacked assets at the time...). I converted this PR back to a draft and will work on a new PR for the required server changes.

@fyfrey force-pushed the feat/app-efficient-sync branch from 71fc321 to be7865b on April 29, 2024 15:49
@fyfrey marked this pull request as ready for review on April 29, 2024 20:11
@jrasm91 (Contributor) left a comment:

LGTM

@alextran1502 (Contributor) commented Apr 30, 2024

Hi @fyfrey, I tested this again and found a problem with rendering the memory lane. Here is the error message:

[screenshot of the error]

From my debugging, the API returned the correct data. Still, the call to get the asset from the database when building the model probably happened before the synchronization process finished, so it could not parse all the data correctly, leading to the error.

Or, similar to the previous symptom, not all assets are fetched:

[screenshot]

@fyfrey (Contributor Author) commented Apr 30, 2024

Re: "Hi @fyfrey I tested this again and found a problem with rendering memory lane."

Thanks @alextran1502! I've pushed a new commit to fix the bug in the memory lane. We were just lucky it did not occur previously.
Solution: do not add a memory lane if its assets are not yet in the DB, and refresh the memory lane provider after the remote asset sync.
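
A minimal sketch of the described guard (the types and names are placeholders, not the actual memory service or provider code):

// Hypothetical sketch: skip lanes whose assets have not been synced into the
// local DB yet; the provider is refreshed again once the remote sync finishes.
class MemoryLane {
  const MemoryLane({required this.title, required this.assetIds});
  final String title;
  final List<String> assetIds;
}

List<MemoryLane> buildLanes(
  List<MemoryLane> remoteLanes,
  Set<String> localAssetIds,
) {
  return remoteLanes
      .where((lane) => lane.assetIds.every(localAssetIds.contains))
      .toList();
}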

@alextran1502 (Contributor):

Hi @fyfrey, the change doesn't entirely fix the issue. I have multiple memories on a day, but only the first memory gets rendered, so I checked the database. On main I see 69000 assets in the Isar database, while on this branch I only see 64000 assets, so the fetching mechanism somehow doesn't fetch all assets.

@fyfrey (Contributor Author) commented Apr 30, 2024

Re: "the change doesn't entirely fix the issue"

Thank you for testing again, @alextran1502. Very interesting. I did not notice any sync discrepancy (asset count) with my initial server changes. In the newer fix PR (#9100) we changed it to use the common asset query builder. It turns out that is incompatible with the app for stacked assets: the app expects all assets individually, while the server returns them in an array nested under the parent asset. This is probably the same issue you encountered a while ago, which led to reverting the server refactoring changes in #7752.

I made another server PR to fix it: #9189.

To test: run the server with #9189 and mobile with this PR... or locally merge one into the other; simply copy the few changes from #9189 into your local branch of this PR :)
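
To illustrate the mismatch in code (a hypothetical DTO shape, not the real OpenAPI model; the actual fix in #9189 is on the server side): the app stores a flat list of individual assets, so children nested under a stack parent would otherwise need to be unpacked before insertion.

// Illustration only: hypothetical DTO shape with stack children nested
// under their parent, flattened into the individual rows the app expects.
class AssetDto {
  const AssetDto({required this.id, this.stackChildren = const []});
  final String id;
  final List<AssetDto> stackChildren;
}

List<AssetDto> flattenStacks(List<AssetDto> parents) => [
      for (final parent in parents) ...[parent, ...parent.stackChildren],
    ];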

@alextran1502 (Contributor) commented May 1, 2024

I tried again with #9189 on my prod (FOR SCIENCE!!) and I still see missing memories. Checking the database again, I now see 68000 assets fetched on this PR, while main still has 69000.

@fyfrey (Contributor Author) commented May 1, 2024

Something must still be different between the old and the new server query. We need an internal test instance that has all the edge cases your prod instance does :-P

The old method also fetched archived assets for partners; maybe this accounts for the difference. But memories should not show archived assets... I'll take another look at the code, or better, at the generated SQL.

Edit:
I've changed #9189 to also return archived assets for partners (just in the legacy getAllAsset method). I hope this also resolves the difference for your prod instance.

@alextran1502 merged commit 116043b into main on May 14, 2024
23 checks passed
@alextran1502 deleted the feat/app-efficient-sync branch on May 14, 2024 15:35